General Convergence Analysis of Stochastic First-Order Methods for Composite Optimization

Authors

Abstract

In this paper, we consider stochastic composite convex optimization problems with the objective function satisfying a stochastic bounded gradient condition, with or without a quadratic functional growth property. These models include the most well-known classes of objective functions analyzed in the literature: nonsmooth Lipschitz functions and compositions of a (potentially) nonsmooth function and a smooth function, with or without strong convexity. Based on the flexibility offered by our general model, we consider several variants of stochastic first-order methods, such as the stochastic proximal gradient and the stochastic proximal point algorithms. Usually, the convergence theory for these methods has been derived for simple stochastic optimization models under restrictive assumptions, and the rates are in general sublinear and hold only for specific decreasing stepsizes. Hence, we analyze the convergence rates of stochastic first-order methods with constant or variable stepsize under general assumptions covering a large class of objective functions. For constant stepsize, we show that these methods can achieve a linear convergence rate up to a constant proportional to the stepsize and, under a strong stochastic bounded gradient condition, even pure linear convergence. Moreover, when a variable stepsize is chosen, we derive sublinear convergence rates for these stochastic first-order methods. Finally, the stochastic gradient mapping and the Moreau smoothing mapping introduced in the present paper lead to simple and intuitive proofs.
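
To make the setting concrete, here is a minimal, illustrative sketch (not the authors' implementation) of a stochastic proximal gradient method with constant stepsize, assuming a least-squares smooth part and an ℓ1 regularizer whose proximal operator is soft-thresholding; the problem data, dimensions, and stepsize below are placeholders for a toy example.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_proximal_gradient(A, b, lam, stepsize=1e-2, epochs=50, seed=0):
    # Minimize (1/2n) ||A x - b||^2 + lam ||x||_1 using one random row per step:
    # a stochastic gradient step on the smooth part, then the prox of the ell_1 term.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs * n):
        i = rng.integers(n)
        grad_i = (A[i] @ x - b[i]) * A[i]              # unbiased gradient estimate of the smooth part
        x = soft_threshold(x - stepsize * grad_i, stepsize * lam)
    return x

# Toy usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = np.zeros(20); x_true[:3] = 1.0
b = A @ x_true + 0.1 * rng.standard_normal(200)
print(stochastic_proximal_gradient(A, b, lam=0.1)[:5])   # roughly [1, 1, 1, 0, 0] up to noise

With a constant stepsize, the iterates settle in a neighborhood of the solution whose radius scales with the stepsize, which is the regime the abstract refers to as linear convergence up to a constant proportional to the stepsize.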


Similar Articles

Fast First-Order Methods for Composite Convex Optimization with Backtracking

We propose new versions of accelerated first-order methods for convex composite optimization, where the prox parameter is allowed to increase from one iteration to the next. In particular, we show that a full backtracking strategy can be used within the FISTA [1] and FALM algorithms [7] while preserving their worst-case iteration complexities of O(√(L(f)/ε)). In the original versions of FISTA an...
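
The key ingredient of such schemes is a backtracking search on the Lipschitz estimate before each proximal step. The sketch below shows standard FISTA with the classical monotone backtracking rule (the cited paper's full backtracking additionally lets the estimate decrease again); the handles f, grad_f, and prox_g and the constants L0 and eta are assumptions for illustration.

import numpy as np

def fista_backtracking(f, grad_f, prox_g, x0, L0=1.0, eta=2.0, iters=200):
    # FISTA with monotone backtracking on the Lipschitz estimate L:
    # L is increased until the quadratic upper model holds at the prox point.
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    L = L0
    for _ in range(iters):
        g = grad_f(y)
        while True:
            x = prox_g(y - g / L, 1.0 / L)             # prox_g(v, tau) = argmin_u g(u) + ||u - v||^2 / (2 tau)
            d = x - y
            if f(x) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= eta                                    # backtrack: enlarge the Lipschitz estimate
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)     # Nesterov momentum step
        x_prev, t = x, t_next
    return x_prev

# Toy usage: lasso, f(x) = 0.5 ||A x - b||^2, g(x) = lam ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10)); b = A @ np.ones(10); lam = 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - lam * tau, 0.0)
x_hat = fista_backtracking(f, grad_f, prox_g, np.zeros(10))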


Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

• Let A_k denote the set of methods that observe a sequence of data pairs Y^t = (F(θ^t, X^t), F(τ^t, X^t)), 1 ≤ t ≤ k, and return an estimate θ̂(k) ∈ Θ.
• Let F_G denote the class of functions we want to optimize, where for each (F, P) ∈ F_G the subgradient g(θ; X) satisfies E_P[‖g(θ; X)‖²_*] ≤ G².
• For each A ∈ A_k and (F, P) ∈ F_G, consider the optimization gap: ε_k(A, F, P, Θ) := f(θ̂(k)) − inf_{θ ∈ Θ} f(θ) ...
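
As a rough illustration of this paired-observation model, the sketch below forms a two-point gradient estimate from two function evaluations that share the same sample X and feeds it into a decreasing-stepsize loop; the toy objective, perturbation size delta, and stepsize schedule are assumptions rather than the exact estimator analyzed in the cited paper.

import numpy as np

def two_point_gradient(F, theta, X, delta, rng):
    # Zeroth-order gradient estimate built from a pair of function values that
    # share the same random sample X, matching the paired observations Y^t above.
    u = rng.standard_normal(theta.shape)
    return (F(theta + delta * u, X) - F(theta, X)) / delta * u

# Toy usage: minimize f(theta) = E ||theta - X||^2 with X ~ N(2*ones, I); the minimizer is the mean.
rng = np.random.default_rng(0)
theta = np.zeros(3)
F = lambda th, x: np.sum((th - x) ** 2)
for k in range(1, 2001):
    X = rng.standard_normal(3) + 2.0
    g = two_point_gradient(F, theta, X, delta=1e-3, rng=rng)
    theta -= (0.5 / np.sqrt(k)) * g                    # decreasing stepsize, as usual for SA methods
print(theta)                                           # drifts toward the mean, roughly [2, 2, 2]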


Efficient Methods for Stochastic Composite Optimization

This paper considers an important class of convex programming problems whose objective function Ψ is given by the summation of a smooth and non-smooth component. Further, it is assumed that the only information available for the numerical scheme to solve these problems is the subgradients of Ψ contaminated by stochastic noise. Our contribution mainly consists of the following aspects. Firstly, ...


Stochastic first order methods in smooth convex optimization

In this paper, we are interested in the development of efficient first-order methods for convex optimization problems in the simultaneous presence of smoothness of the objective function and stochasticity in the first-order information. First, we consider the Stochastic Primal Gradient method, which is nothing else but the Mirror Descent SA method applied to a smooth function and we develop new...
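
For intuition, here is a small sketch of stochastic Mirror Descent with the entropy mirror map over the probability simplex, a common instance of the Mirror Descent SA scheme mentioned above (not necessarily the exact setup of the cited paper); the toy linear objective, stepsize, and iteration count are placeholders.

import numpy as np

def mirror_descent_sa_simplex(stoch_grad, d, iters=1000, step=0.1, seed=0):
    # Stochastic Mirror Descent with the entropy mirror map: the update is a
    # multiplicative-weights step, so the iterates stay on the probability simplex.
    rng = np.random.default_rng(seed)
    x = np.full(d, 1.0 / d)
    x_avg = np.zeros(d)
    for k in range(1, iters + 1):
        g = stoch_grad(x, rng)
        x = x * np.exp(-step * g)
        x /= x.sum()
        x_avg += (x - x_avg) / k                        # running average of iterates (SA-style output)
    return x_avg

# Toy usage: minimize E[<c + noise, x>] over the simplex; the minimum sits on the cheapest coordinate.
c = np.array([0.3, 0.1, 0.5, 0.2])
stoch_grad = lambda x, rng: c + 0.1 * rng.standard_normal(c.size)
print(mirror_descent_sa_simplex(stoch_grad, d=4))       # most of the mass ends up on index 1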


Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization

This paper considers a class of constrained stochastic composite optimization problems whose objective function is given by the summation of a differentiable (possibly nonconvex) component, together with a certain non-differentiable (but convex) component. In order to solve these problems, we propose a randomized stochastic projected gradient (RSPG) algorithm, in which proper mini-batch of samp...
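
A minimal sketch of a mini-batch stochastic proximal gradient step of this flavor is given below, assuming an ℓ1 nonsmooth term and a toy nonconvex smooth part; the batch size, stepsize, and problem are placeholders, and this is a simplified illustration rather than the RSPG algorithm exactly as specified in the cited paper.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def minibatch_prox_sgd(stoch_grad, x0, lam, batch=32, stepsize=0.05, iters=500, seed=0):
    # Average a mini-batch of stochastic gradients of the (possibly nonconvex) smooth part,
    # then apply the proximal operator of the convex nonsmooth term lam * ||.||_1.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        g = np.mean([stoch_grad(x, rng) for _ in range(batch)], axis=0)
        x = soft_threshold(x - stepsize * g, stepsize * lam)
    return x

# Toy usage: nonconvex smooth part f(x) = sum_i x_i^2 / (1 + x_i^2) observed through noisy gradients.
stoch_grad = lambda x, rng: 2 * x / (1 + x ** 2) ** 2 + 0.1 * rng.standard_normal(x.size)
print(minibatch_prox_sgd(stoch_grad, x0=np.ones(5), lam=0.01))   # driven toward 0, the composite minimizer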



Journal

Journal title: Journal of Optimization Theory and Applications

Year: 2021

ISSN: 0022-3239, 1573-2878

DOI: https://doi.org/10.1007/s10957-021-01821-2